Model Zoos: A Dataset of Diverse Populations of Neural Network Models

Neural Information Processing Systems

In recent years, neural networks (NNs) have evolved from laboratory environments to the state of the art for many real-world problems. It has been shown that NN models (i.e., their weights and biases) evolve along unique trajectories in weight space during training. Consequently, a population of such neural network models (referred to as a model zoo) forms structures in weight space. We believe that the geometry, curvature, and smoothness of these structures contain information about the state of training and can reveal latent properties of individual models. With such model zoos, one could investigate novel approaches to (i) model analysis, (ii) discovering unknown learning dynamics, (iii) learning rich representations of such populations, or (iv) exploiting model zoos for generative modelling of NN weights and biases.
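To make the weight-space view concrete, here is a minimal sketch (not the dataset's actual API; architecture sizes and the distance measure are illustrative assumptions) of how a population of same-architecture models becomes a set of points in weight space whose pairwise geometry can be inspected:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative sketch: flatten each model's weights into one vector so a
# population ("model zoo") becomes a set of points in weight space.
def flatten_weights(layers):
    """Concatenate all weight tensors of one model into a single vector."""
    return np.concatenate([w.ravel() for w in layers])

# Three toy 2-layer models with identical architecture (4 -> 3 -> 1).
zoo = []
for _ in range(3):
    layers = [rng.normal(size=(4, 3)), rng.normal(size=(3, 1))]
    zoo.append(flatten_weights(layers))

zoo = np.stack(zoo)  # shape: (num_models, num_parameters)

# Pairwise Euclidean distances hint at the structure the population
# forms in weight space (e.g., clusters of models trained alike).
dists = np.linalg.norm(zoo[:, None, :] - zoo[None, :, :], axis=-1)
```

Analyses of the kind the abstract mentions (curvature, smoothness, latent model properties) would operate on representations like `zoo` and geometric summaries like `dists`.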


Personalizing Reinforcement Learning from Human Feedback with Variational Preference Learning

Neural Information Processing Systems

Reinforcement Learning from Human Feedback (RLHF) is a powerful paradigm for aligning foundation models to human values and preferences. However, preferences differ across individuals and subgroups; when these differences arise, traditional RLHF frameworks simply average over them, leading to inaccurate rewards and poor performance for individual subgroups. To address the need for pluralistic alignment, we develop a class of multimodal RLHF methods. Our proposed techniques are based on a latent variable formulation: inferring a novel user-specific latent and learning reward models and policies conditioned on this latent, without additional user-specific data. While conceptually simple, we show that in practice this reward modeling requires careful algorithmic consideration of model architecture and reward scaling.
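A minimal sketch of the latent-conditioning idea, under assumptions not taken from the paper (a linear reward head and hand-picked latents; the authors' actual models and inference procedure are more involved): the reward for the same response features depends on a user-specific latent, so two users need not receive the same averaged reward.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(x, z, W):
    """Hypothetical latent-conditioned reward: score depends on both the
    response features x and the user-specific latent z."""
    return float(np.concatenate([x, z]) @ W)

dim_x, dim_z = 4, 2
W = rng.normal(size=dim_x + dim_z)   # toy reward-model parameters

x = rng.normal(size=dim_x)           # features of one candidate response
z_user_a = np.array([1.0, 0.0])      # inferred latent for user A (assumed)
z_user_b = np.array([0.0, 1.0])      # inferred latent for user B (assumed)

# The same response scores differently under different user latents,
# instead of collapsing to a single monolithic (averaged) reward.
r_a = reward(x, z_user_a, W)
r_b = reward(x, z_user_b, W)
```

In the full method, the latent would be inferred from a user's preference data rather than fixed by hand, and a policy would be conditioned on the same latent.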



Avoiding bias and increasing diversity in AI and health research - Part 1 - Bristows

#artificialintelligence

This article is part 1 of our bias in AI series, an update to the original article in our Biotech Review of the year, issue 8. During the COVID-19 pandemic, the notion of different health outcomes for different populations has gained a higher profile in the public consciousness, particularly in light of the varying effect of COVID-19 on different community groups. Varying outcomes can arise for a variety of reasons, one of which is bias (whether conscious or unconscious) in the healthcare system. But surely this isn't something that needs to be considered in relation to AI in health research, as AI systems are inanimate and can't display human faults…right? There is often a misconception that medical devices and AI systems can't produce biased results, because they work using logic and process rather than being tainted by flawed assumptions rooted in human error or prejudice. Ultimately, however, it is humans who design medical devices, and they are tested on datasets collected by humans.


IBM scientists hope to detect early signs of dementia using AI

#artificialintelligence

Researchers from IBM and Pfizer have published details on a new AI model that interprets written speech, which they claim can predict whether a person will develop Alzheimer's seven years before they show symptoms. The idea is attractive for its simplicity: the model's only input is a written sample from the "cookie-theft picture description task," a common cognitive test that asks participants to describe what's happening in a drawing (three guesses what the drawing is of). Researchers trained the AI to pore over participants' responses, picking up on hints of cognitive decline like repetition, misspellings, two-word sentences, and limited vocabulary. Now, hold your applause: the model is in its early days, and it isn't any better than current cognitive assessments. The initial study, based on data gathered from just 270 Americans over the course of four decades, showed the AI could predict a future Alzheimer's diagnosis 70% of the time.


Challenges in Supporting Exploratory Search through Voice Assistants

Ma, Xiao, Liu, Ariel

arXiv.org Artificial Intelligence

Voice assistants have been successfully adopted for simple, routine tasks, such as asking for the weather or setting an alarm. However, as people get more familiar with voice assistants, they may increase their expectations for more complex tasks, such as exploratory search-- e.g., "What should I do when I visit Paris with kids? Oh, and ideally not too expensive." Compared to simple search tasks such as "How tall is the Eiffel Tower?", which can be answered with a single-shot answer, the response to exploratory search is more nuanced, especially through voice-based assistants. In this paper, we outline four challenges in designing voice assistants that can better support exploratory search: addressing situationally induced impairments; working with mixed-modal interactions; designing for diverse populations; and meeting users' expectations and gaining their trust. Addressing these challenges is important for developing more "intelligent" voice-based personal assistants.